
    Long time asymptotics of a Brownian particle coupled with a random environment with non-diffusive feedback force

    We study the long-time behaviour of a Brownian particle moving in an anomalously diffusing field, the evolution of which depends on the particle position. We prove that the process describing the asymptotic behaviour of the Brownian particle has bounded (in time) variance when the particle interacts with a subdiffusive field; when the interaction is with a superdiffusive field, the variance of the limiting process grows in time as t^(2γ−1), where 1/2 < γ < 1. Two different kinds of superdiffusive (random) environments are considered: one is described through the fractional Laplacian, the other via the Riemann-Liouville fractional integral. The subdiffusive field is modelled through the Riemann-Liouville fractional derivative.
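
    As an aside on the construction, the following is a minimal Python sketch of the Riemann-Liouville fractional integral of a Brownian path, the building block of one of the superdiffusive environments mentioned above. The exponent, grid and left-point quadrature are illustrative assumptions, not taken from the paper.

        # Illustrative only: Riemann-Liouville fractional integral of a Brownian path,
        # (I^g W)(t) = 1/Gamma(g) * int_0^t (t - s)^(g - 1) W(s) ds.
        import numpy as np
        from math import gamma as Gamma

        g = 0.75                      # hypothetical exponent in the superdiffusive range (1/2, 1)
        n, T = 2000, 1.0
        dt = T / n
        t = np.linspace(dt, T, n)

        rng = np.random.default_rng(0)
        W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))   # Brownian path on the grid

        # Left-point quadrature; the kernel is shifted by dt to tame the
        # integrable singularity at s = t.
        I = np.empty(n)
        for k in range(n):
            I[k] = np.sum((t[k] - t[: k + 1] + dt) ** (g - 1) * W[: k + 1]) * dt / Gamma(g)
        print("(I^g W)(T) =", I[-1])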

    Asymptotic Analysis for Markovian models in non-equilibrium Statistical Mechanics

    This thesis is mainly concerned with the problem of exponential convergence to equilibrium for open classical systems. We consider a model of a small Hamiltonian system coupled to a heat reservoir, which is described by the Generalized Langevin Equation (GLE), and we focus on a class of Markovian approximations to the GLE. The generator of these Markovian dynamics is a hypoelliptic non-selfadjoint operator. We look at the problem of exponential convergence to equilibrium by using and comparing three different approaches: classical ergodic theory, hypocoercivity theory and semiclassical analysis (singular space theory). In particular, we describe a technique to easily determine the spectrum of quadratic hypoelliptic operators (which are in general non-selfadjoint) and hence obtain the exact rate of convergence to equilibrium.
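
    As a toy illustration of what such a technique delivers, consider the generator of a linear (kinetic) Langevin equation: it is a quadratic hypoelliptic non-selfadjoint operator whose spectrum is generated by the eigenvalues of the drift matrix, so the exact convergence rate can be read off numerically. The Python sketch below uses arbitrary parameter values and is not the thesis's method.

        # Minimal sketch: for dq = p dt, dp = (-omega^2 q - gamma p) dt + noise, the
        # generator is quadratic and hypoelliptic; its spectrum consists of
        # nonnegative-integer combinations of the eigenvalues of the drift matrix B,
        # so the exact exponential rate of convergence is -max_i Re(lambda_i).
        import numpy as np

        omega, gam = 1.0, 0.5          # hypothetical frequency and friction
        B = np.array([[0.0, 1.0],
                      [-omega**2, -gam]])
        lam = np.linalg.eigvals(B)
        print("eigenvalues of B:", lam)
        print("rate of convergence to equilibrium:", -max(lam.real))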

    Bounding stationary averages of polynomial diffusions via semidefinite programming

    We introduce an algorithm based on semidefinite programming that yields increasing (resp. decreasing) sequences of lower (resp. upper) bounds on polynomial stationary averages of diffusions with polynomial drift vector and diffusion coefficients. The bounds are obtained by optimising an objective, determined by the stationary average of interest, over the set of real vectors defined by certain linear equalities and semidefinite inequalities which are satisfied by the moments of any stationary measure of the diffusion. We exemplify the use of the approach through several applications: a Bayesian inference problem; the computation of Lyapunov exponents of linear ordinary differential equations perturbed by multiplicative white noise; and a reliability problem from structural mechanics. Additionally, we prove that the bounds converge to the infimum and supremum of the set of stationary averages for certain SDEs associated with the computation of the Lyapunov exponents, and we provide numerical evidence of convergence in more general settings.
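
    The mechanism can be made concrete on a one-dimensional toy problem. The Python sketch below bounds the stationary second moment of dX = (X - X^3) dt + sigma dW by optimising over truncated moment sequences subject to the stationarity equalities E[L x^n] = 0 and a positive semidefinite moment matrix; the drift, the truncation level and the use of cvxpy are illustrative assumptions, not the paper's setup.

        # Sketch (assumes cvxpy with an SDP-capable solver is installed):
        # bounds on E[X^2] under any stationary measure of dX = (X - X^3) dt + sigma dW.
        import cvxpy as cp

        sigma = 0.5
        K = 10                       # truncation: moments m_0, ..., m_K
        m = cp.Variable(K + 1)       # m[n] stands for E[X^n]

        constraints = [m[0] == 1]
        # Stationarity: E[L x^n] = 0 with L f = (x - x^3) f' + (sigma^2/2) f'', i.e.
        # n m_n - n m_{n+2} + (sigma^2/2) n (n-1) m_{n-2} = 0.
        for n in range(1, K - 1):
            expr = n * m[n] - n * m[n + 2]
            if n >= 2:
                expr = expr + 0.5 * sigma**2 * n * (n - 1) * m[n - 2]
            constraints.append(expr == 0)

        # The Hankel moment matrix of any probability measure is positive semidefinite.
        d = K // 2
        H = cp.bmat([[m[i + j] for j in range(d + 1)] for i in range(d + 1)])
        constraints.append(H >> 0)

        lower = cp.Problem(cp.Minimize(m[2]), constraints).solve()
        upper = cp.Problem(cp.Maximize(m[2]), constraints).solve()
        print(f"E[X^2] in [{lower:.4f}, {upper:.4f}]")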

    Uniform in time convergence of numerical schemes for stochastic differential equations via strong exponential stability: Euler methods, split-step and tamed schemes

    We prove a general criterion providing sufficient conditions under which a time-discretization of a given Stochastic Differential Equation (SDE) is a uniform in time approximation of the SDE. As discussed in the paper, the criterion is also, to a certain extent, necessary. Using such a criterion we then analyse the convergence properties of numerical methods for solutions of SDEs; we consider explicit and implicit Euler, split-step and (truncated) tamed Euler methods. In particular, we show that, under mild conditions on the coefficients of the SDE (locally Lipschitz and strictly monotonic), these methods produce approximations of the law of the solution of the SDE that converge uniformly in time. The theoretical results are verified by numerical examples.
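
    For concreteness, the following Python sketch implements the tamed Euler scheme, one of the methods analysed above, on the toy SDE dX = -(X + X^3) dt + dW, whose drift is locally Lipschitz and strictly monotonic; the step size, horizon and drift are illustrative choices, not the paper's examples.

        # Tamed Euler: X_{k+1} = X_k + h b(X_k) / (1 + h |b(X_k)|) + sqrt(h) xi_k.
        import numpy as np

        def tamed_euler(x0, b, h, n_steps, rng):
            x = x0
            for _ in range(n_steps):
                drift = b(x)
                x = x + h * drift / (1.0 + h * abs(drift)) + np.sqrt(h) * rng.normal()
            return x

        rng = np.random.default_rng(1)
        b = lambda x: -x - x**3                     # locally Lipschitz, strictly monotonic
        samples = [tamed_euler(2.0, b, h=0.01, n_steps=10_000, rng=rng) for _ in range(500)]
        print("approximate stationary second moment:", np.mean(np.square(samples)))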

    Non-stationary phase of the MALA algorithm

    The Metropolis-Adjusted Langevin Algorithm (MALA) is a Markov Chain Monte Carlo method which creates a Markov chain reversible with respect to a given target distribution, π^N, with Lebesgue density on R^N; it can hence be used to approximately sample the target distribution. When the dimension N is large, a key question is to determine the computational cost of the algorithm as a function of N. One approach to this question, which we adopt here, is to derive diffusion limits for the algorithm. The family of target measures that we consider in this paper are, in general, in non-product form and are of interest in applied problems as they arise in Bayesian nonparametric statistics and in the study of conditioned diffusions. Furthermore, we study the situation, which arises in practice, where the algorithm is started out of stationarity. We thereby significantly extend previous works, which consider either only measures of product form, when the Markov chain is started out of stationarity, or measures defined via a density with respect to a Gaussian, when the Markov chain is started in stationarity. We prove that, in the non-stationary regime, the computational cost of the algorithm is of the order N^(1/2) with dimension, as opposed to what is known to happen in the stationary regime, where the cost is of the order N^(1/3).
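
    A minimal Python sketch of one MALA step follows, with the proposal variance scaled as in the non-stationary regime discussed above; the standard Gaussian target, the initial condition and the constants are illustrative stand-ins for the non-product measures actually treated in the paper.

        # One MALA step for a target with density proportional to exp(-U(x)) on R^N.
        import numpy as np

        def mala_step(x, U, gradU, h, rng):
            # Euler-discretised Langevin proposal, then Metropolis accept-reject.
            y = x - h * gradU(x) + np.sqrt(2 * h) * rng.normal(size=x.shape)
            log_q_xy = -np.sum((y - x + h * gradU(x)) ** 2) / (4 * h)   # log q(x -> y)
            log_q_yx = -np.sum((x - y + h * gradU(y)) ** 2) / (4 * h)   # log q(y -> x)
            log_alpha = U(x) - U(y) + log_q_yx - log_q_xy
            return y if np.log(rng.uniform()) < log_alpha else x

        N = 100
        rng = np.random.default_rng(2)
        U = lambda x: 0.5 * np.sum(x**2)     # illustrative Gaussian target
        gradU = lambda x: x
        h = 1.0 / np.sqrt(N)                 # non-stationary-regime scaling, cost O(N^(1/2))
        x = np.full(N, 5.0)                  # chain started out of stationarity
        for _ in range(1000):
            x = mala_step(x, U, gradU, h, rng)
        print("squared norm after burn-in:", np.sum(x**2))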

    Markovian approximation of classical open systems


    Optimal scaling of the MALA algorithm with irreversible proposals for Gaussian targets

    It is well known in many settings that reversible Langevin diffusions in confining potentials converge to equilibrium exponentially fast. Adding irreversible perturbations to the drift of a Langevin diffusion that maintain the same invariant measure accelerates its convergence to stationarity. Many existing works thus advocate the use of such non-reversible dynamics for sampling. When implementing Markov Chain Monte Carlo (MCMC) algorithms using time discretisations of such Stochastic Differential Equations (SDEs), one can append the discretisation with the usual Metropolis-Hastings accept-reject step, and this is often done in practice because the accept-reject step eliminates bias. On the other hand, such a step makes the resulting chain reversible. It is not known whether adding the accept-reject step preserves the faster mixing properties of the non-reversible dynamics. In this paper, we address this gap between theory and practice by analysing the optimal scaling of MCMC algorithms constructed from proposal moves that are time-step Euler discretisations of an irreversible SDE, for high-dimensional Gaussian target measures. We call the resulting algorithm ipMALA, in comparison with the classical MALA algorithm (here "ip" stands for irreversible proposal). In order to quantify how the cost of the algorithm scales with the dimension N, we prove invariance principles for the appropriately rescaled chain. In contrast to the usual MALA algorithm, we show that there could be two regimes asymptotically: (i) a diffusive regime, as in the MALA algorithm, and (ii) a "fluid" regime, where the limit is an ordinary differential equation. We provide concrete examples where the limit is a diffusion, as in the standard MALA, but with provably higher limiting acceptance probabilities. Numerical results are also given corroborating the theory.
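
    The construction can be sketched in a few lines of Python: the proposal is one Euler step of an irreversible Langevin SDE whose antisymmetric perturbation preserves the target, followed by the usual accept-reject correction. The Gaussian target, the matrix J and all constants are illustrative assumptions, not the paper's setting.

        # Metropolis-Hastings with an irreversible-proposal Euler step (ipMALA-style sketch).
        import numpy as np

        N = 10
        rng = np.random.default_rng(3)
        A = rng.normal(size=(N, N))
        J = A - A.T                               # antisymmetric: gamma*J*gradU preserves exp(-U)
        gamma_, h = 1.0, 0.05

        U = lambda x: 0.5 * np.sum(x**2)          # illustrative Gaussian target
        gradU = lambda x: x
        drift = lambda x: (gamma_ * J - np.eye(N)) @ gradU(x)   # -gradU + gamma*J*gradU

        def log_q(frm, to):
            # log density of the Gaussian proposal to ~ N(frm + h*drift(frm), 2h*I)
            return -np.sum((to - frm - h * drift(frm)) ** 2) / (4 * h)

        x = rng.normal(size=N)
        accepts = 0
        for _ in range(5000):
            y = x + h * drift(x) + np.sqrt(2 * h) * rng.normal(size=N)
            log_alpha = U(x) - U(y) + log_q(y, x) - log_q(x, y)
            if np.log(rng.uniform()) < log_alpha:
                x, accepts = y, accepts + 1
        # Note: the accept-reject step makes the resulting chain reversible,
        # which is exactly the tension the abstract analyses.
        print("acceptance rate:", accepts / 5000)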

    A Function Space HMC Algorithm With Second Order Langevin Diffusion Limit

    We describe a new MCMC method optimized for the sampling of probability measures on Hilbert space which have a density with respect to a Gaussian; such measures arise in the Bayesian approach to inverse problems, and in conditioned diffusions. Our algorithm is based on two key design principles: (i) algorithms which are well-defined in infinite dimensions result in methods which do not suffer from the curse of dimensionality when they are applied to approximations of the infinite-dimensional target measure on R^N; (ii) non-reversible algorithms can have better mixing properties than their reversible counterparts. The method we introduce is based on the hybrid Monte Carlo algorithm, tailored to incorporate these two design principles. The main result of this paper states that the new algorithm, appropriately rescaled, converges weakly to a second order Langevin diffusion on Hilbert space; as a consequence the algorithm explores the approximate target measures on R^N in a number of steps which is independent of N. We also present the underlying theory for the limiting non-reversible diffusion on Hilbert space, including characterization of the invariant measure, and we describe numerical simulations demonstrating that the proposed method has favourable mixing properties as an MCMC algorithm.
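
    In finite dimensions, a classical mechanism producing second order Langevin limits is hybrid Monte Carlo with only partial momentum refreshment (Horowitz-type); the Python sketch below shows that building block with an illustrative Gaussian target, without reproducing the Hilbert-space tailoring that is the paper's actual contribution.

        # Generalized HMC with partial momentum refreshment (illustrative sketch).
        import numpy as np

        N, h, theta = 50, 0.1, 0.2
        rng = np.random.default_rng(4)
        U = lambda q: 0.5 * np.sum(q**2)          # illustrative Gaussian target
        gradU = lambda q: q

        def leapfrog(q, p, h):
            p = p - 0.5 * h * gradU(q)
            q = q + h * p
            p = p - 0.5 * h * gradU(q)
            return q, p

        q, p = rng.normal(size=N), rng.normal(size=N)
        for _ in range(2000):
            # partial refreshment keeps memory of the momentum (non-reversible flavour)
            p = np.cos(theta) * p + np.sin(theta) * rng.normal(size=N)
            H0 = U(q) + 0.5 * np.sum(p**2)
            q_new, p_new = leapfrog(q, p, h)
            H1 = U(q_new) + 0.5 * np.sum(p_new**2)
            if np.log(rng.uniform()) < H0 - H1:
                q, p = q_new, p_new
            else:
                p = -p                            # momentum flip on rejection
        print("|q|^2 at the final state:", np.sum(q**2))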

    Diffusion Limit for the Random Walk Metropolis Algorithm out of stationarity

    The Random Walk Metropolis (RWM) algorithm is a Metropolis–Hastings Markov Chain Monte Carlo algorithm designed to sample from a given target distribution π^N with Lebesgue density on R^N. Like any other Metropolis–Hastings algorithm, RWM constructs a Markov chain by randomly proposing a new position (the “proposal move”), which is then accepted or rejected according to a rule which makes the chain reversible with respect to π^N. When the dimension N is large, a key question is to determine the optimal scaling with N of the proposal variance: if the proposal variance is too large, the algorithm will reject the proposed moves too often; if it is too small, the algorithm will explore the state space too slowly. Determining the optimal scaling of the proposal variance gives a measure of the cost of the algorithm as well. One approach to tackle this issue, which we adopt here, is to derive diffusion limits for the algorithm. Such an approach has been proposed in the seminal papers (Ann. Appl. Probab. 7 (1) (1997) 110–120; J. R. Stat. Soc. Ser. B Stat. Methodol. 60 (1) (1998) 255–268). In particular, in (Ann. Appl. Probab. 7 (1) (1997) 110–120) the authors derive a diffusion limit for the RWM algorithm under the two following assumptions: (i) the algorithm is started in stationarity; (ii) the target measure π^N is in product form. The present paper considers the situation of practical interest in which both assumptions (i) and (ii) are removed. That is, (a) we study the case (which occurs in practice) in which the algorithm is started out of stationarity, and (b) we consider target measures which are in non-product form. Roughly speaking, we consider target measures that admit a density with respect to a Gaussian; such measures arise in Bayesian nonparametric statistics and in the study of conditioned diffusions. We prove that, out of stationarity, the optimal scaling for the proposal variance is O(N^(−1)), as it is in stationarity. In this optimal scaling, a diffusion limit is obtained and the cost of reaching and exploring the invariant measure scales as O(N). Notice that the optimal scalings in and out of stationarity need not be the same in general, and indeed they differ, e.g., in the case of the MALA algorithm (Stoch. Partial Differ. Equ. Anal. Comput. 6 (3) (2018) 446–499). More importantly, our diffusion limit is given by a stochastic PDE, coupled to a scalar ordinary differential equation; such an ODE gives a measure of how far from stationarity the process is and can therefore be taken as an indicator of convergence. In this sense, this paper contributes to the understanding of the long-standing problem of monitoring convergence of MCMC algorithms.
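
    A minimal Python sketch of RWM with the O(N^(−1)) proposal variance follows; the product Gaussian target and the initial condition are illustrative stand-ins for the non-product measures treated in the paper, and the constants are arbitrary.

        # Random Walk Metropolis with proposal variance l^2 / N.
        import numpy as np

        N, ell = 200, 1.0
        rng = np.random.default_rng(5)
        U = lambda x: 0.5 * np.sum(x**2)          # illustrative product Gaussian target
        x = np.full(N, 3.0)                       # chain started out of stationarity
        accepts = 0
        n_steps = 10 * N                          # reaching stationarity costs O(N) steps
        for _ in range(n_steps):
            y = x + np.sqrt(ell**2 / N) * rng.normal(size=N)   # symmetric proposal
            if np.log(rng.uniform()) < U(x) - U(y):            # Metropolis rule
                x, accepts = y, accepts + 1
        print("acceptance rate:", accepts / n_steps)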